Single image super-resolution method based on residual shrinkage network in real complex scenes
Ying LI, Chao HUANG, Chengdong SUN, Yong XU
Journal of Computer Applications    2023, 43 (12): 3903-3910.   DOI: 10.11772/j.issn.1001-9081.2022111697

Paired high- and low-resolution images are scarce in the real world. Traditional single image Super-Resolution (SR) methods are typically trained on pairs of high- and low-resolution images, but they obtain the training set by synthesizing data, considering only bilinear downsampling as the image degradation process. However, image degradation in the real world is complex and diverse, and traditional SR methods reconstruct poorly when facing real images with unknown degradation. To address these problems, a single image super-resolution method was proposed for real complex scenes. Firstly, high- and low-resolution images were captured by a camera at different focal lengths and registered into image pairs, forming the CSR (Camera Super-Resolution) dataset covering various scenes. Secondly, to simulate real-world image degradation as faithfully as possible, the image degradation model was improved through parameter randomization of the degradation factors and nonlinear combination of degradations, and the captured image pairs were combined with this degradation model to synthesize the training set. Finally, since degradation factors were present in the dataset, a residual shrinkage network and U-Net were embedded into the benchmark model to suppress the redundant information that these factors introduce into the feature space. Experimental results indicate that, under complex degradation conditions, the proposed method improves PSNR by 0.7 dB and 0.14 dB and SSIM by 0.001 and 0.031 over the BSRGAN (Blind Super-Resolution Generative Adversarial Network) method on the RealSR and CSR test sets respectively, and achieves better objective indicators and visual quality than existing methods on complex degradation datasets.
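As an illustration of the shrinkage idea only (a minimal PyTorch sketch with illustrative layer sizes, not the paper's architecture), a residual block with learned channel-wise soft thresholding might look like:

```python
# Minimal sketch, assuming PyTorch; sizes and the gating branch are illustrative.
import torch
import torch.nn as nn

class ResidualShrinkageBlock(nn.Module):
    """Residual block whose residual branch is soft-thresholded with a learned,
    channel-wise threshold, suppressing degradation-related redundancy."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Small gating branch that predicts a per-channel shrinkage ratio in (0, 1).
        self.gate = nn.Sequential(
            nn.Linear(channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        r = self.body(x)
        # Channel-wise mean of |r| sets the scale of the threshold.
        scale = r.abs().mean(dim=(2, 3))                       # (N, C)
        tau = (scale * self.gate(scale)).unsqueeze(-1).unsqueeze(-1)
        # Soft thresholding: shrink small (likely redundant) responses toward zero.
        r = torch.sign(r) * torch.clamp(r.abs() - tau, min=0.0)
        return x + r
```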

Multi-robot task allocation algorithm combining genetic algorithm and rolling scheduling
Fuqin DENG, Huanzhao HUANG, Chaoen TAN, Lanhui FU, Jianmin ZHANG, Tinlun LAM
Journal of Computer Applications    2023, 43 (12): 3833-3839.   DOI: 10.11772/j.issn.1001-9081.2022121916

The purpose of research on Multi-Robot Task Allocation (MRTA) is to improve the task completion efficiency of robots in smart factories. To address the deficiency of existing algorithms in handling large-scale, multi-constrained MRTA, an MRTA Algorithm Combining Genetic Algorithm and Rolling Scheduling (ACGARS) was proposed. Firstly, an encoding based on a Directed Acyclic Graph (DAG) was adopted in the genetic algorithm to handle priority constraints among tasks efficiently. Then, prior knowledge was added to the initial population of the genetic algorithm to improve its search efficiency. Finally, a rolling scheduling strategy based on task groups was designed to reduce the scale of the problem to be solved, so that large-scale problems can be solved efficiently. Experimental results on large-scale problem instances show that, compared with the schemes generated by the Constructive Heuristic Algorithm (CHA), the MinInterfere Algorithm (MIA), and the Genetic Algorithm with Penalty Strategy (GAPS), the scheme generated by the proposed algorithm shortens the average order completion time by 30.02%, 16.86% and 75.65% respectively when the number of task groups is 20, which verifies that the proposed algorithm can effectively shorten the average waiting time of orders and improve the efficiency of multi-robot task allocation.
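A minimal sketch of the two ingredients (hypothetical task and precedence structures, not the paper's exact genetic operators): precedence-feasible chromosomes drawn from the DAG, and task groups released to the solver in a rolling fashion.

```python
# Minimal sketch; task IDs, edges, and the solve_group callback are hypothetical.
import random
from collections import defaultdict

def random_topological_order(tasks, edges):
    """Random task order that respects DAG precedence constraints (u before v)."""
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = [t for t in tasks if indeg[t] == 0]
    order = []
    while ready:
        t = random.choice(ready)   # randomness provides population diversity
        ready.remove(t)
        order.append(t)
        for v in succ[t]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order

def rolling_schedule(task_groups, edges, solve_group):
    """Solve one task group at a time so the GA never faces the full problem.
    Cross-group precedence is assumed to be respected by the group ordering."""
    plan = []
    for group in task_groups:
        sub_edges = [(u, v) for u, v in edges if u in group and v in group]
        plan.extend(solve_group(group, sub_edges))
    return plan
```

In this sketch, random_topological_order can serve both as the chromosome initializer and as a repair step after crossover or mutation, while solve_group stands in for one GA run over a single task group.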

Clustering-based hyperlink prediction
Pengfei QI, Lihua ZHOU, Guowang DU, Hao HUANG, Tong HUANG
Journal of Computer Applications    2020, 40 (2): 434-440.   DOI: 10.11772/j.issn.1001-9081.2019101730

Hyperlink prediction aims to utilize the inherent properties of an observed network to reproduce the missing links in the network. Existing hyperlink prediction algorithms often make predictions over the entire network, so link types with insufficient training samples may be missed, leaving the detected link types incomplete. To address this problem, a clustering-based hyperlink prediction algorithm named C-CMM was proposed. Firstly, the dataset was divided into clusters, and then a model was constructed for each cluster to perform hyperlink prediction. The proposed algorithm makes full use of the information contained in the observed samples of each cluster and widens the coverage of the prediction results. Experimental results on three real-world datasets show that the proposed algorithm outperforms many state-of-the-art link prediction algorithms in prediction accuracy and efficiency, and provides more comprehensive prediction coverage.
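A minimal sketch of the cluster-then-model idea (assuming scikit-learn and feature-vector link candidates, which simplifies the C-CMM formulation):

```python
# Minimal sketch; KMeans + logistic regression stand in for the paper's models.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def fit_clustered_predictor(X, y, n_clusters=3):
    """Cluster the link candidates first, then fit one predictor per cluster.
    Assumes each cluster contains both positive and negative samples."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    models = {}
    for c in range(n_clusters):
        idx = km.labels_ == c
        models[c] = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    return km, models

def predict(km, models, X_new):
    """Route each candidate to its cluster's model and return link probabilities."""
    labels = km.predict(X_new)
    return np.array([models[c].predict_proba(x.reshape(1, -1))[0, 1]
                     for c, x in zip(labels, X_new)])
```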

News topic mining method based on weighted latent Dirichlet allocation model
LI Xiangdong, BA Zhichao, HUANG Li
Journal of Computer Applications    2014, 34 (5): 1354-1359.   DOI: 10.11772/j.issn.1001-9081.2014.05.1354

To solve the problems of low accuracy and poor interpretability in traditional news topic mining, a new method was proposed based on weighted Latent Dirichlet Allocation (LDA) combined with the structural characteristics of news text. Firstly, vocabulary weights were improved from different angles and composite weights were built; by extending the feature-item generation process of the LDA model, more expressive words were obtained. Secondly, the Category Distinguish Word (CDW) method was used to optimize the word order of the generated results, reducing noise and topic ambiguity and improving the interpretability of the topics. Finally, according to the mathematical characteristics of the topic probability distribution model, the topics were quantified in terms of the documents' contribution to each topic and the topic weight probabilities to obtain the hot topics. The simulation results show that, compared with the traditional LDA model, the false negative rate and false positive rate of the weighted LDA model drop by 1.43% and 0.16% on average, and the minimum standard cost drops by 2.68% on average, which confirms the feasibility and effectiveness of the method.
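A rough sketch of the weighting idea (assuming scikit-learn; the composite weight is approximated here by an IDF rescaling of raw term counts, standing in for the paper's composite weights):

```python
# Minimal sketch; IDF rescaling is a stand-in for the paper's composite weighting.
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.decomposition import LatentDirichletAllocation

def weighted_lda(docs, n_topics=10):
    """Fit LDA on a term matrix whose counts are rescaled by term weights,
    so that more expressive words contribute more to the topic-word distributions."""
    vec = CountVectorizer()
    counts = vec.fit_transform(docs)
    idf = TfidfTransformer(use_idf=True).fit(counts).idf_   # per-term weight
    weighted = counts.multiply(idf).tocsr()
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topic = lda.fit_transform(weighted)                 # document-topic matrix
    return lda, vec, doc_topic
```

Topic "hotness" could then be scored from doc_topic, e.g. by summing each topic's weight over the documents, which mirrors the abstract's notion of document contribution to a topic.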

Fine-grained protection domain model in a process and its implementation
Hao HUANG
Journal of Computer Applications   
A fine-grained protection domain model was proposed to address the problem of dynamically changing a process's capabilities. According to a process's different access modes to its address space and system resources in different execution phases, the model partitions the process into multiple protection domains and sets up an address-space access mode for each of them, which makes it feasible to resist code injection attacks. Meanwhile, a Mandatory Access Control (MAC) framework is integrated into the model to provide access control over system resources, meeting the security requirements of the system.
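As an illustration only (Linux-specific, using ctypes to call libc's mprotect from user space; not the paper's implementation), the sketch below toggles the access mode of one page between execution phases:

```python
# Minimal sketch; Linux-only, and purely illustrative of per-phase access modes.
import ctypes
import mmap

libc = ctypes.CDLL("libc.so.6", use_errno=True)
PROT_READ, PROT_WRITE = 1, 2
PAGE = mmap.PAGESIZE

# Allocate one page of anonymous memory (read/write) -- the "open" phase.
buf = mmap.mmap(-1, PAGE)
buf[:5] = b"hello"

# Address of the page; mmap'd memory is page-aligned, as mprotect requires.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

# Entering a restricted execution phase: drop write permission on the page.
if libc.mprotect(ctypes.c_void_p(addr), PAGE, PROT_READ) != 0:
    raise OSError(ctypes.get_errno(), "mprotect failed")

print(buf[:5])        # reads are still allowed in this phase
# buf[0:1] = b"H"     # a write here would now fault, as intended

# Leaving the restricted phase: restore read/write access.
libc.mprotect(ctypes.c_void_p(addr), PAGE, PROT_READ | PROT_WRITE)
```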